Ensemble learning via feature selection and multiple transformed subsets: Application to image classification

Authors

Abstract

In the machine learning field, especially in classification tasks, model design and construction are very important. Constructing a model from a limited set of features may bound its performance and lead to results poorer than what some algorithms could otherwise provide. To this end, ensemble methods were proposed in the literature. Their main goal is to learn several models providing features or predictions whose joint use performs better than any single model. In this paper, we propose three variants of a new and efficient ensemble approach that is able to enhance a linear discriminant embedding method. As a case study, we consider the "Inter-class sparsity discriminative least square regression" (ICS_DLSR) method. We seek the estimation of an enhanced data representation: instead of deploying multiple classifiers on top of the transformed features, we target multiple extracted feature subsets and their learned embeddings. The subsets are associated with ranked original features, and multiple subsets are used for estimating the transformations. The derived transformed features are concatenated to form a single representation vector used in the classification process. Many factors are studied and investigated in this paper, including parameter combinations, the number of models, different training percentages, the feature-selection scheme, etc. Our approach has been benchmarked on image datasets of various sizes and types (faces, objects, and scenes). The proposed scheme achieved competitive performance on four face datasets (Extended Yale B, LFW-a, Georgia, and FEI), as well as on the COIL20 object dataset and the Outdoor Scene dataset. We measured the performance of our schemes against competing methods (ICS_DLSR, RDA_GD, RSLDA, PCE, LDE, LDA, SVM, and KNN); the conducted experiments showed that, in a consistent manner, the proposed ensemble schemes outperform the single-model baseline and the competing methods.
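The pipeline described in the abstract (rank the original features, learn one linear embedding per ranked subset, concatenate the transformed features into a single representation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a variance-based ranking and a PCA projection as a stand-in for the learned ICS_DLSR embedding, and the dataset, subset sizes, and embedding dimension are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 60 samples, 20 features, 3 classes (hypothetical stand-in dataset).
X = rng.normal(size=(60, 20))
y = np.repeat([0, 1, 2], 20)
X[y == 1, :5] += 2.0           # make some features discriminative
X[y == 2, 5:10] -= 2.0

def pca_embedding(X, dim):
    """Stand-in linear embedding (PCA); the paper learns ICS_DLSR instead."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T           # projection matrix (n_features x dim)

# 1) Rank the original features (here by variance, a simple stand-in criterion).
order = np.argsort(X.var(axis=0))[::-1]

# 2) Build nested subsets of top-ranked features; learn one embedding per subset.
subset_sizes = [5, 10, 20]
blocks = []
for k in subset_sizes:
    idx = order[:k]
    W = pca_embedding(X[:, idx], dim=3)
    blocks.append(X[:, idx] @ W)    # transformed features for this subset

# 3) Concatenate the transformed subsets into one representation vector
#    per sample, which is then fed to the classifier.
Z = np.hstack(blocks)
print(Z.shape)                      # (60, 9): 3 models x 3 dims each
```

Any classifier (e.g. nearest neighbor) can then be applied to the rows of `Z`; the point of the scheme is that the concatenated representation pools complementary views from the differently sized feature subsets.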


Similar articles

Image alignment via kernelized feature learning

Machine learning is an application of artificial intelligence that is able to automatically learn and improve from experience without being explicitly programmed. The primary assumption for most of the machine learning algorithms is that the training set (source domain) and the test set (target domain) follow from the same probability distribution. However, in most of the real-world application...


Ensemble Feature Selection: Consistent Descriptor Subsets for Multiple QSAR Models

Selecting a small subset of descriptors from a large pool to build a predictive quantitative structure-activity relationship (QSAR) model is an important step in the QSAR modeling process. In general, subset selection is very hard to solve, even approximately, with guaranteed performance bounds. Traditional approaches employ deterministic or stochastic methods to obtain a descriptor subset that...
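As a deliberately tiny illustration of the deterministic subset-selection heuristics the snippet mentions, the sketch below runs greedy forward selection on synthetic descriptor data. The data, the least-squares residual criterion, and all sizes are assumptions for illustration, not the method of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy QSAR-like data: 50 compounds x 12 descriptors; the activity depends
# on descriptors 0, 3 and 7 plus a little noise (hypothetical setup).
D = rng.normal(size=(50, 12))
activity = 2.0 * D[:, 0] - 1.5 * D[:, 3] + D[:, 7] + 0.1 * rng.normal(size=50)

def forward_select(D, y, k):
    """Greedy forward selection: at each step, add the descriptor that most
    reduces the least-squares residual error (a deterministic heuristic)."""
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for j in range(D.shape[1]):
            if j in chosen:
                continue
            cols = D[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.sum((y - cols @ coef) ** 2)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

subset = forward_select(D, activity, k=3)
print(sorted(subset))   # with this strong synthetic signal, recovers 0, 3, 7
```

Exhaustive search over all descriptor subsets is exponential in the pool size, which is why greedy or stochastic heuristics like this are the traditional approach.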


Ensemble Classification and Extended Feature Selection for Credit Card Fraud Detection

Due to the rise of technology, the possibility of fraud in areas such as banking has increased. Credit card fraud is a crucial problem in banking and its danger is ever increasing. This paper proposes an advanced data mining method, considering both feature selection and decision cost, for accuracy enhancement of credit card fraud detection. After selecting the best and most effec...


Nearest neighbor classification from multiple feature subsets

Combining multiple classifiers is an effective technique for improving accuracy. There are many general combining algorithms, such as Bagging, Boosting, or Error Correcting Output Coding, that significantly improve classifiers like decision trees, rule learners, or neural networks. Unfortunately, these combining methods do not improve the nearest neighbor classifier. In this paper, we present MFS, a...
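A minimal sketch of the MFS idea, 1-NN classifiers run on random feature subsets and combined by majority vote, might look like this. All data, the number of models, and the subset size are hypothetical choices, not values from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two classes, separable on 5 of 12 features (hypothetical setup).
X = rng.normal(size=(80, 12))
y = np.repeat([0, 1], 40)
X[y == 1, :5] += 5.0
perm = rng.permutation(80)
train, test = perm[:60], perm[60:]

def one_nn(Xtr, ytr, Xte):
    """1-nearest-neighbor labels under squared Euclidean distance."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
    return ytr[d.argmin(axis=1)]

# MFS-style ensemble: run 1-NN on several random feature subsets, then
# combine the predictions by simple majority vote.
n_models, subset_size = 11, 4
votes = np.zeros((len(test), 2), dtype=int)
for _ in range(n_models):
    feats = rng.choice(12, size=subset_size, replace=False)
    pred = one_nn(X[np.ix_(train, feats)], y[train], X[np.ix_(test, feats)])
    votes[np.arange(len(test)), pred] += 1

ensemble_pred = votes.argmax(axis=1)
accuracy = (ensemble_pred == y[test]).mean()
```

Subsampling features rather than training samples is what makes this work for nearest neighbor: bootstrap resampling barely changes a 1-NN decision boundary, but different feature subsets produce genuinely diverse neighbors to vote over.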


Un-supervised feature learning and its application in image classification

1) Feed-forward neural network: Suppose we have a training set containing M samples {x_n ∈ R^{N1}, y_n ∈ R^{L}}, n = 1, ..., M. x_n and y_n may or may not be the same. We represent each x_n using N1 neurons, as shown in Figure 2, layer L1. Then we construct layer L2, which consists of N2 neurons, representing a_n ∈ R^{N2}. At each layer, we add one more neuron, which has "1" in the circle, as a bias term. The value o...
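The forward pass described in the snippet, N1 inputs plus a constant "1" bias neuron feeding the N2 units of layer L2, can be sketched as follows. The sigmoid activation and all layer sizes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

N1, N2, M = 4, 3, 5             # input width, layer-L2 width, sample count
X = rng.normal(size=(M, N1))    # each row is one x_n in R^{N1}

# Append the constant "1" bias neuron to every sample, as in the snippet.
Xb = np.hstack([X, np.ones((M, 1))])

# Weight matrix from layer L1 (plus bias) to layer L2; the last row holds
# the bias weights.
W = rng.normal(size=(N1 + 1, N2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

A = sigmoid(Xb @ W)             # layer-L2 activations a_n in R^{N2}
print(A.shape)                  # (5, 3)
```

Folding the bias into an extra always-on input neuron, as done here, is equivalent to keeping a separate bias vector; it simply lets one matrix multiplication carry both the weights and the biases.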



Journal

Journal title: Applied Soft Computing

Year: 2021

ISSN: 1568-4946, 1872-9681

DOI: https://doi.org/10.1016/j.asoc.2021.108006